
    Leaf: Modularity for Temporary Sharing in Separation Logic (Extended Version)

    In concurrent verification, separation logic provides a strong story for handling both resources that are owned exclusively and resources that are shared persistently (i.e., forever). However, the situation is more complicated for temporarily shared state, where state might be shared and then later reclaimed as exclusive. We believe that a framework for temporarily-shared state should meet two key goals not adequately met by existing techniques. One, it should allow and encourage users to verify new sharing strategies. Two, it should provide an abstraction where users manipulate shared state in a way agnostic to the means with which it is shared. We present Leaf, a library in the Iris separation logic which accomplishes both of these goals by introducing a novel operator, which we call guarding, that allows one proposition to represent a shared version of another. We demonstrate that Leaf meets these two goals through a modular case study: we verify a reader-writer lock that supports shared state, and a hash table built on top of it that uses shared state
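
    The temporary-sharing pattern this abstract describes can be pictured, outside any proof assistant, as the familiar reader-writer discipline: state is handed out read-only to many parties and later reclaimed for exclusive mutation. The Python sketch below is only an operational analogue of that pattern (the class and method names are invented here); Leaf itself is a library of Iris proof rules, not runtime code.

```python
# Illustrative sketch only: a plain Python analogue of "temporarily shared
# state" (exclusive -> shared -> reclaimed as exclusive). Not Leaf's Iris
# formalism; RWGuard and its methods are hypothetical names.
import threading

class RWGuard:
    """Readers-writer discipline: many shared readers, or one exclusive owner."""
    def __init__(self, value):
        self._value = value
        self._readers = 0
        self._cond = threading.Condition()

    def read(self):
        # Take a shared (read-only) view of the state.
        with self._cond:
            self._readers += 1
        try:
            return self._value          # shared access; no mutation here
        finally:
            with self._cond:
                self._readers -= 1
                self._cond.notify_all()

    def write(self, new_value):
        # Reclaim exclusive ownership: wait until no shared views remain.
        with self._cond:
            while self._readers:
                self._cond.wait()
            self._value = new_value     # exclusive access restored

guard = RWGuard({"k": 1})
print(guard.read())    # shared read
guard.write({"k": 2})  # exclusive update once all readers are done
print(guard.read())
```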

    Algebraic Reductions of Knowledge

    We introduce reductions of knowledge, a generalization of arguments of knowledge, which reduce checking knowledge of a witness in one relation to checking knowledge of a witness in another (simpler) relation. Reductions of knowledge unify a growing class of modern techniques as well as provide a compositional framework to modularly reason about individual steps in complex arguments of knowledge. As a demonstration, we simplify and unify recursive arguments over linear algebraic statements by decomposing them as a sequence of reductions of knowledge. To do so, we develop the tensor reduction of knowledge, which generalizes the central reductive step common to many recursive arguments. Underlying the tensor reduction of knowledge is a new information-theoretic reduction, which, for any modules $U$, $U_1$, and $U_2$ such that $U \cong U_1 \otimes U_2$, reduces the task of evaluating a homomorphism in $U$ to evaluating a homomorphism in $U_1$ and evaluating a homomorphism in $U_2$
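
    The underlying factorization can be previewed with a toy linear-algebra example: a homomorphism (here, a linear functional) on $U \cong U_1 \otimes U_2$ is determined by a $\dim U_1 \times \dim U_2$ block of coefficients, so evaluating it on a pure tensor splits into one contraction over $U_2$ and one over $U_1$. The NumPy sketch below illustrates only that information-theoretic step, not the interactive, knowledge-sound protocol built around it; the dimensions and variable names are arbitrary choices.

```python
# Toy illustration: evaluating a linear functional on U = U_1 (x) U_2
# reduces to two smaller contractions, one per factor module.
import numpy as np

d1, d2 = 3, 4
rng = np.random.default_rng(0)

f = rng.integers(-5, 5, size=d1 * d2)   # homomorphism on U, as coefficients
u1 = rng.integers(-5, 5, size=d1)       # element of U_1
u2 = rng.integers(-5, 5, size=d2)       # element of U_2

# Direct evaluation in the big module U = U_1 (x) U_2.
direct = f @ np.kron(u1, u2)

# Reduced evaluation: reshape f into a d1 x d2 block and contract each factor
# separately -- first against U_2, then against U_1.
partial = f.reshape(d1, d2) @ u2        # an evaluation "in U_2"
reduced = u1 @ partial                  # an evaluation "in U_1"

assert direct == reduced
print(direct, reduced)
```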

    Hash First, Argue Later: Adaptive Verifiable Computations on Outsourced Data

    Proof systems for verifiable computation (VC) have the potential to make cloud outsourcing more trustworthy. Recent schemes enable a verifier with limited resources to delegate large computations and verify their outcome based on succinct arguments: verification complexity is linear in the size of the inputs and outputs (not the size of the computation). However, cloud computing also often involves large amounts of data, which may exceed the local storage and I/O capabilities of the verifier, and thus limit the use of VC. In this paper, we investigate multi-relation hash & prove schemes for verifiable computations that operate on succinct data hashes. Hence, the verifier delegates both storage and computation to an untrusted worker. She uploads data and keeps hashes; exchanges hashes with other parties; verifies arguments that consume and produce hashes; and selectively downloads the actual data she needs to access. Existing instantiations that fit our definition either target restricted classes of computations or employ relatively inefficient techniques. Instead, we propose efficient constructions that lift classes of existing argument schemes for fixed relations to multi-relation hash & prove schemes. Our schemes (1) rely on hash algorithms that run linearly in the size of the input; (2) enable constant-time verification of arguments on hashed inputs; (3) incur minimal overhead for the prover. Their main benefit is to amortize the linear cost for the verifier across all relations with shared I/O. Concretely, compared to solutions that can be obtained from prior work, our new hash & prove constructions yield a 1,400x speed-up for provers. We also explain how to further reduce the linear verification costs by partially outsourcing the hash computation itself, obtaining a 480x speed-up when applied to existing VC schemes, even on single-relation executions
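
    To make the interaction pattern concrete, the sketch below walks through a hash & prove workflow in plain Python: the verifier keeps only a digest of the uploaded data, the untrusted worker returns an output together with a proof tied to that digest, and the verifier checks the two against each other. The "proof" here is just re-execution of the claimed preimage, standing in for the succinct arguments the paper constructs; every name in the sketch (Worker, hp_prove, hp_verify) is invented for illustration, not the paper's API.

```python
# Schematic hash & prove interaction with a stand-in "argument".
import hashlib
import json

def digest(data) -> str:
    # Hashing runs linearly in the input size; the verifier keeps only this value.
    return hashlib.sha256(json.dumps(data).encode()).hexdigest()

def relation(xs):                 # the outsourced computation F
    return sum(x * x for x in xs)

class Worker:
    """Untrusted party: stores the data and produces (output, proof)."""
    def __init__(self):
        self.store = {}

    def upload(self, data):
        h = digest(data)
        self.store[h] = data
        return h

    def hp_prove(self, h):
        data = self.store[h]
        y = relation(data)
        proof = {"data": data}    # stand-in; a real scheme sends a short argument
        return y, proof

def hp_verify(h, y, proof) -> bool:
    # Stand-in check: recompute from the claimed preimage of h.
    # A real hash & prove scheme verifies in (amortized) constant time instead.
    return digest(proof["data"]) == h and relation(proof["data"]) == y

worker = Worker()
data = list(range(1000))
h = digest(data)                  # verifier keeps the hash, discards the data
assert worker.upload(data) == h
y, proof = worker.hp_prove(h)
print(hp_verify(h, y, proof), y)  # True 332833500
```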

    How to run POSIX apps in a minimal picoprocess

    We envision a future where Web, mobile, and desktop applications are delivered as isolated, complete software stacks to a minimal, secure client host. This shift imbues app vendors with full autonomy to maintain their apps' integrity. Achieving this goal requires shifting complex behavior out of the client platform and into the vendors' isolated apps. We ported rich, interactive POSIX apps, such as Gimp and Inkscape, to a spartan host platform. We describe this effort in sufficient detail to support reproducibility

    Transparency Dictionaries with Succinct Proofs of Correct Operation

    This paper introduces Verdict, a transparency dictionary, where an untrusted service maintains a label-value map that clients can query and update (foundational infrastructure for end-to-end encryption and other applications). To prevent unauthorized modifications to the dictionary, for example, by a malicious or a compromised service provider, Verdict produces publicly-verifiable cryptographic proofs that it correctly executes both reads and authorized updates. A key advance over prior work is that Verdict produces efficiently-verifiable proofs while incurring modest proving overheads. Verdict accomplishes this by composing indexed Merkle trees (a new SNARK-friendly data structure) with Phalanx (a new SNARK that supports amortized constant-sized proofs and leverages particular workload characteristics to speed up the prover). Our experimental evaluation demonstrates that Verdict scales to dictionaries with millions of labels while imposing modest overheads on the service and clients
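
    As background for how a client can check a single label-value pair against a short commitment, the sketch below implements the classical Merkle-tree membership proof that such authenticated dictionaries build on. It deliberately omits what makes Verdict new (indexed Merkle trees, the Phalanx SNARK, and non-membership or update proofs), and all function names are ad hoc choices for this example.

```python
# Minimal authenticated-dictionary sketch: a plain Merkle tree over sorted
# (label, value) leaves, with a membership proof checked against the root.
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

def leaf_hash(label: str, value: str) -> bytes:
    return H(b"leaf", label.encode(), value.encode())

def build_levels(leaves):
    levels = [leaves]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:                     # duplicate last node on odd levels
            cur = cur + [cur[-1]]
        levels.append([H(b"node", cur[i], cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def prove(levels, index):
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2 == 0))  # (sibling, am-I-left?)
        index //= 2
    return path

def verify(root, label, value, path):
    h = leaf_hash(label, value)
    for sibling, h_is_left in path:
        h = H(b"node", h, sibling) if h_is_left else H(b"node", sibling, h)
    return h == root

entries = sorted({"alice": "pk1", "bob": "pk2", "carol": "pk3"}.items())
leaves = [leaf_hash(k, v) for k, v in entries]
levels = build_levels(leaves)
root = levels[-1][0]
i = [k for k, _ in entries].index("bob")
print(verify(root, "bob", "pk2", prove(levels, i)))        # True
print(verify(root, "bob", "pk-forged", prove(levels, i)))  # False
```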